Causal Identification under Markov equivalence: Calculus, Algorithm, and Completeness

Neural Information Processing Systems

A plethora of methods has been developed for solving this problem, including the celebrated do-calculus [Pearl, 1995]. In practice, these results are not always applicable since they require a fully specified causal diagram as input, which is usually not available.
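To make the role of a fully specified diagram concrete, here is a minimal sketch of the simplest identification result, backdoor adjustment, on a hypothetical three-variable model (the variable names and parameters are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical model: Z -> X, Z -> Y, X -> Y, with Z an observed confounder.
Z = rng.binomial(1, 0.5, n)
X = rng.binomial(1, np.where(Z == 1, 0.8, 0.2))
Y = rng.binomial(1, 0.2 + 0.5 * X + 0.2 * Z)

# Naive conditional estimate P(Y=1 | X=1): confounded by Z.
naive = Y[X == 1].mean()

# Backdoor adjustment, licensed by the diagram:
# P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, Z=z) * P(Z=z)
adjusted = sum(Y[(X == 1) & (Z == z)].mean() * (Z == z).mean() for z in (0, 1))
```

In this toy model the true interventional quantity is P(Y=1 | do(X=1)) = 0.8, which the adjusted estimate recovers while the naive estimate overshoots; knowing which adjustment is valid is exactly what requires the diagram.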






8fdd149fcaa7058caccc9c4ad5b0d89a-AuthorFeedback.pdf

Neural Information Processing Systems

Point #6 clarifies questions in "Correctness". 1. Are graphs necessary? (Q1-2, Q4) The departing point of our work is the realization that an imitating policy is generally underdetermined by the observational data alone. For concreteness, consider models M1, M2, unknown to researchers, where in M1, X ← U, Y ← X; in M2, X ← U, Y ← X ⊕ U; in Mi, i = 1, 2, P(U = 0) = P(U = 1) = 0.5. We assume that Y, U are unobserved; Y is the reward. Having said that, our methods could certainly be combined with GAIL to ensure both the causal robustness and the scalability with high-dimensional data, which we will acknowledge in the paper. R2: (1) A causal diagram containing the latent reward Y generalizes the traditional settings of imitation learning.
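The M1/M2 example can be simulated directly. The sketch below (assuming the arrows and XOR reconstruction above) shows that the two models induce the same observed distribution over X, yet the same intervention do(X = 1) yields different expected rewards, so imitation is underdetermined:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
U = rng.integers(0, 2, n)       # unobserved, P(U=0) = P(U=1) = 0.5

# Observational regime: X <- U in both models, so the observed
# distribution of X is identical and the models are indistinguishable.
X = U
# M1: Y <- X ;  M2: Y <- X XOR U   (Y and U are never observed)

# Interventional regime do(X = 1): the same action earns different rewards.
x = np.ones(n, dtype=int)
EY_M1 = x.mean()                # E[Y | do(X=1)] in M1 -> 1.0
EY_M2 = (x ^ U).mean()          # E[Y | do(X=1)] in M2 -> ~0.5
```

Since no amount of observational data on X separates M1 from M2, graphical assumptions are needed to pin down the value of an imitating policy.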


Nested Counterfactual Identification from Arbitrary Surrogate Experiments

Neural Information Processing Systems

In this paper, we study the identification of nested counterfactuals from an arbitrary combination of observations and experiments. Specifically, building on a more explicit definition of nested counterfactuals, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
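The core unnesting step can be sketched as follows, with notation simplified and subscripts denoting potential responses (the exact statement and conditions are in the paper):

```latex
% Illustrative form of counterfactual unnesting: a nested counterfactual
% Y_{W_x} is rewritten as a sum over unnested joint counterfactuals.
P(Y_{W_x} = y) \;=\; \sum_{w} P(Y_w = y,\; W_x = w)
```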




On Transportability for Structural Causal Bandits

Park, Min Woo, Lee, Sanghack

arXiv.org Machine Learning

Intelligent agents equipped with causal knowledge can optimize their action spaces to avoid unnecessary exploration. The structural causal bandit framework provides a graphical characterization for identifying actions that cannot maximize rewards, by leveraging prior knowledge of the underlying causal structure. While such knowledge enables an agent to estimate the expected rewards of certain actions based on others in online interactions, there has been little guidance on how to transfer information inferred from arbitrary combinations of datasets collected under different conditions -- observational or experimental -- and from heterogeneous environments. In this paper, we investigate the structural causal bandit with transportability, where priors from the source environments are fused to enhance learning in the deployment setting. We demonstrate that it is possible to exploit invariances across environments to consistently improve learning. The resulting bandit algorithm achieves a sub-linear regret bound with an explicit dependence on the informativeness of prior data, and it may outperform standard bandit approaches that rely solely on online learning.
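The benefit of fusing prior data can be illustrated with a generic sketch. This is not the paper's algorithm: it is plain UCB1 where transported source-environment pulls, assumed to have invariant arm means, are fused in as pseudo-observations; the arm means and counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ucb1(means, horizon, prior_counts=None, prior_sums=None):
    """UCB1, optionally warm-started with fused pseudo-pulls from prior data."""
    k = len(means)
    counts = np.array(prior_counts if prior_counts else [0] * k, dtype=float)
    sums = np.array(prior_sums if prior_sums else [0.0] * k, dtype=float)
    best, regret = max(means), 0.0
    for t in range(1, horizon + 1):
        if counts.min() == 0:
            arm = int(np.argmin(counts))          # pull each arm once first
        else:
            bounds = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(bounds))
        counts[arm] += 1
        sums[arm] += rng.binomial(1, means[arm])
        regret += best - means[arm]
    return regret

means = [0.3, 0.5, 0.7]
# Hypothetical transported prior: 500 source-environment pulls per arm,
# usable in deployment because the arm means are assumed invariant.
prior_counts = [500] * 3
prior_sums = [int(rng.binomial(500, m)) for m in means]

regret_fused = ucb1(means, 5000, prior_counts, prior_sums)
regret_plain = ucb1(means, 5000)
```

With informative priors the fused learner starts with tight estimates and accumulates far less regret than the purely online learner, mirroring the regret bound's dependence on the informativeness of prior data.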